    Theory of magnetism with temporal disorder applied to magnetically doped ZnO

    A dynamic model of the asymmetric Ising glass is presented: an Ising model with antiferromagnetic bonds of probability q arranged at random in a ferromagnetic matrix. The dynamics is introduced by changing the arrangement of the antiferromagnetic bonds after every n Monte Carlo steps while keeping the same value of q and the same spin configuration. In the region where there is a second-order transition between the ferromagnetic and paramagnetic states, the dynamic behaviour follows that expected for motional narrowing and reverts to the static behaviour only for large n. The dynamic behaviour is different where there is a first-order transition between the ferromagnetic and spin-glass states: there it shows no effects of motional narrowing. The implications of this are discussed. This model is devised to explain the properties of magnetically doped ZnO, where the magnetisation is reduced when the exchange interactions change with time.
    Comment: Paper was presented at MMM 2008 and is accepted for publication in J.A.
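
    As a concrete illustration of the dynamics described above, here is a minimal Monte Carlo sketch in Python. It assumes a 2D square lattice with Metropolis dynamics, and it interprets "changing the arrangement of the antiferromagnetic bonds" as redrawing the bond signs with the same concentration q every n sweeps while keeping the spin configuration; the lattice size, temperature, and function names are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_bonds(L, q):
    """Bond signs: +1 ferromagnetic, -1 antiferromagnetic with probability q.
    bonds[0, i, j] couples (i, j) to (i+1, j); bonds[1, i, j] to (i, j+1)."""
    return np.where(rng.random((2, L, L)) < q, -1, 1)

def metropolis_sweep(spins, bonds, T):
    """One Metropolis sweep over an L x L lattice with periodic boundaries."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        # Local field from the four neighbours, weighted by the bond signs.
        h = (bonds[0, i, j] * spins[(i + 1) % L, j]
             + bonds[0, (i - 1) % L, j] * spins[(i - 1) % L, j]
             + bonds[1, i, j] * spins[i, (j + 1) % L]
             + bonds[1, i, (j - 1) % L] * spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * h
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

def run(L=32, q=0.1, T=1.8, n_rearrange=10, n_sweeps=2000):
    """Temporal disorder: every n_rearrange sweeps the antiferromagnetic bonds
    are redrawn with the same concentration q while the spins are kept."""
    spins = rng.choice([-1, 1], size=(L, L))
    bonds = random_bonds(L, q)
    mags = []
    for sweep in range(n_sweeps):
        if sweep % n_rearrange == 0:
            bonds = random_bonds(L, q)   # new arrangement, same q, same spins
        metropolis_sweep(spins, bonds, T)
        mags.append(abs(spins.mean()))
    return np.mean(mags[n_sweeps // 2:])  # average magnetisation after burn-in
```
    Sweeping n_rearrange from small to large values in such a sketch is one way to probe the crossover from motionally narrowed to static behaviour that the paper discusses.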

    Quantum adiabatic machine learning by zooming into a region of the energy surface

    Recent work has shown that quantum annealing for machine learning, referred to as QAML, can perform comparably to state-of-the-art machine learning methods, with a specific application to Higgs boson classification. We propose QAML-Z, an algorithm that iteratively zooms in on a region of the energy surface by mapping the problem to a continuous space and sequentially applying quantum annealing to an augmented set of weak classifiers. Results on a programmable quantum annealer show that QAML-Z matches classical deep neural network performance at small training set sizes and reduces the performance margin between QAML and classical deep neural networks by almost 50% at large training set sizes, as measured by area under the receiver operating characteristic curve. The significant improvement of quantum annealing algorithms for machine learning and the use of a discrete quantum algorithm on a continuous optimization problem together open a class of problems that can be solved by quantum annealers and suggest that near-term quantum machine learning is approaching classical benchmarks in performance.
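
    The abstract does not give the QAML-Z energy function or annealing schedule, so the following Python sketch is only an illustration of the zooming idea: a greedy classical Ising solver stands in for the quantum annealer, the energy surface is assumed to be the squared error of a weighted vote of weak classifiers, and all names and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def solve_ising(h, J, n_restarts=20, n_sweeps=200):
    """Classical stand-in for the quantum annealer: greedy single-spin descent
    on E(s) = h.s + (1/2) s.J.s with s_i in {-1, +1} and zero diagonal in J."""
    n = len(h)
    best_s, best_e = None, np.inf
    for _ in range(n_restarts):
        s = rng.choice([-1.0, 1.0], size=n)
        for _ in range(n_sweeps):
            improved = False
            for i in range(n):
                dE = -2.0 * s[i] * (h[i] + J[i] @ s)  # energy change if spin i flips
                if dE < 0:
                    s[i] *= -1
                    improved = True
            if not improved:
                break
        e = h @ s + 0.5 * s @ J @ s
        if e < best_e:
            best_s, best_e = s.copy(), e
    return best_s

def zooming_fit(C, y, n_iters=8, shrink=0.5):
    """Illustrative zooming loop.  C: (n_samples, n_classifiers) weak-classifier
    outputs in {-1, +1}; y: labels in {-1, +1}.  Each iteration an Ising solve
    picks a direction s for the continuous weights w, which then move by a step
    that halves, zooming in on a region of the assumed squared-error surface."""
    n = C.shape[1]
    w, step = np.zeros(n), 1.0
    for _ in range(n_iters):
        r = C @ w - y                           # current residual
        # Expand ||C(w + step*s) - y||^2 in s; since s_i^2 = 1, drop constants.
        h = 2.0 * step * (C.T @ r)
        J = 2.0 * step**2 * (C.T @ C)
        np.fill_diagonal(J, 0.0)
        s = solve_ising(h, J)
        w = w + step * s                        # move to the new centre
        step *= shrink                          # and zoom in
    return w
```
    The point the sketch captures is that a binary (Ising) decision is re-used at progressively finer step sizes, so a discrete solver ends up refining effectively continuous weights.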

    Orthogonal Gradient Descent for Continual Learning

    Neural networks achieve state-of-the-art, and sometimes super-human, performance on learning tasks across a variety of domains. Whenever these problems require learning in a continual or sequential manner, however, neural networks suffer from catastrophic forgetting: they forget how to solve previous tasks after being trained on a new task, despite having the capacity to solve both tasks if they were trained on both simultaneously. In this paper, we address this issue from a parameter-space perspective and study an approach that restricts the direction of the gradient updates to avoid forgetting previously learned data. We present the Orthogonal Gradient Descent (OGD) method, which accomplishes this goal by projecting the gradients from new tasks onto a subspace in which the neural network outputs on previous tasks do not change, while the projected gradient remains a useful direction for learning the new task. Our approach utilizes the high capacity of a neural network more efficiently and does not require storing previously learned data, which might raise privacy concerns. Experiments on common benchmarks show the effectiveness of the proposed OGD method.
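
    A minimal PyTorch sketch of the projection step follows. It assumes, as the abstract describes, that the stored directions are gradients of the model outputs on previous-task samples; the helper names are illustrative, and summing the logits to get a scalar for backward() is a simplification of how per-output gradients would be handled.

```python
import torch

def gram_schmidt(vectors, eps=1e-10):
    """Orthonormal basis spanning the stored gradient directions."""
    basis = []
    for v in vectors:
        w = v.clone()
        for b in basis:
            w -= (w @ b) * b
        if w.norm() > eps:
            basis.append(w / w.norm())
    return basis

def stored_directions(model, prev_task_inputs):
    """Gradients of the model outputs on previous-task samples, flattened
    across all parameters; their span is what new updates must avoid."""
    directions = []
    for x in prev_task_inputs:
        model.zero_grad()
        model(x.unsqueeze(0)).sum().backward()   # scalar output for backward()
        directions.append(torch.cat([p.grad.detach().reshape(-1)
                                      for p in model.parameters()]))
    return gram_schmidt(directions)

def project_out(grad, basis):
    """Remove the components of a new-task gradient that lie in the span of
    the stored basis, so previous-task outputs are unchanged to first order."""
    g = grad.clone()
    for b in basis:
        g -= (g @ b) * b
    return g
```
    During training on a new task, the loss gradient is flattened, passed through project_out, written back into the parameters' .grad fields, and only then applied by the optimizer; after a task ends, stored_directions is called on a sample of its data to extend the basis.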

    An assessment of the extent to which the contents of PROSPERO records meet the systematic review protocol reporting items in PRISMA-P [version 1; peer review: 2 approved]

    Introduction: PROSPERO is an international prospective register for systematic review protocols. Many of the registrations are the only available source of information about planned methods. This study investigated the extent to which records in PROSPERO contained the preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P).
    Methods: A random sample of 439 single-entry PROSPERO records of reviews of health interventions registered in 2018 was identified. Using a piloted list of 19 PRISMA-P items, divided into 63 elements, two researchers independently assessed the registration records. Where the information was present, or not applicable to the review, a score of 1 was assigned. Overall scores were calculated and comparisons made by stage of review at registration, whether or not a meta-analysis was planned, and whether or not funding/sponsorship was reported.
    Results: Some key methodological details, such as eligibility criteria, were relatively frequently reported, but much of the information recommended in PRISMA-P was not stated in PROSPERO registrations. Considering the 19 items, the mean score was 4.8 (SD 1.8; median 4; range 2-11), and across all the assessed records only 25% (2081/8227) of the items were scored as reported. Considering the 63 elements, the mean score was 33.4 (SD 5.8; median 33; range 18-47), and overall 53% (14,469/27,279) of the elements were assessed as reported. Reporting was more frequent for items required in PROSPERO than for optional items. The planned comparisons showed no meaningful differences between groups.
    Conclusions: PROSPERO provides reviewers with the opportunity to be transparent about their planned methods and to demonstrate efforts to reduce bias. However, where the PROSPERO record is the only available source of a priori reporting, there is a significant shortfall in the items reported compared to those recommended. This presents challenges in interpretation for those wishing to assess the validity of the final review.